37 research outputs found

    From Traditional Library Instruction to Collaborative Instruction: Charting the Course Toward Evidence-Based Practice

    This presentation will explore how a library's academic liaison program led to a strong teaching partnership within an academic division. The goal of the liaison program is to provide an essential link between the University of Southern California's Norris Medical Library and the University's academic communities. This goal was achieved when a research support librarian teamed up with a professor of physical therapy to develop a curriculum for physical therapy students. The objective of this teaching alliance was for first-year doctor of physical therapy students to learn basic skills for evidence-based practice. The collaboration combined the librarian's expertise in database searching, library instruction, and information literacy with the subject knowledge of the physical therapist. In developing the learning experience, the professor of physical therapy requested assistance through the liaison program. Her requests, however, did not fit within the traditional teaching methods used by the liaison. The librarian and the professor were able to maintain and enrich the interdisciplinary partnership through flexibility, communication, and cooperation. The collaborators will share how they overcame obstacles, learned to speak each other's language, and quelled colleagues' concerns that they were abandoning traditional teaching methods. The conceptual basis of the instructional model will be illustrated with specific teaching examples. A student-centered outcome measure to assess the efficacy of the teaching model will be presented.

    Development and validation of the guideline for reporting evidence-based practice educational interventions and teaching (GREET)

    Abstract Background Most reporting guidelines help researchers report consistent information about study design; however, they contain limited guidance for describing study interventions. Using a three-stage development process, the Guideline for Reporting Evidence-based practice Educational interventions and Teaching (GREET) checklist and an accompanying explanatory paper were developed to guide the reporting of educational interventions for evidence-based practice (EBP). The aim of this study was to complete the final development of the GREET checklist, incorporating psychometric testing to determine inter-rater reliability and criterion validity. Methods The final development of the GREET checklist incorporated the results of a prior systematic review and Delphi survey. Thirty-nine items, including all items from the prior systematic review, were proposed for inclusion in the GREET checklist. These 39 items were considered over a series of consensus discussions to determine which items to include. The GREET checklist and explanatory paper were then developed and underwent psychometric testing with tertiary health professional students, who evaluated the completeness of reporting in a published study using the GREET checklist. For each checklist item, consistency of agreement (%) between participants and with the consensus criterion reference measure was calculated. Criterion validity and inter-rater reliability were analysed using intra-class correlation coefficients (ICC). Results Three consensus discussions were undertaken, with 14 items identified for inclusion in the GREET checklist. Following further expert review by the Delphi panelists, three items were added and minor wording changes were completed, resulting in 17 checklist items. Psychometric testing of the updated GREET checklist was completed by 31 participants (n = 11 undergraduate, n = 20 postgraduate). The consistency of agreement between participant ratings for completeness of reporting and the consensus criterion ratings ranged from 19% for item 4 (Steps of EBP) to 94% for item 16 (Planned delivery). Overall consistency of agreement, for criterion validity (ICC 0.73) and inter-rater reliability (ICC 0.96), was good to almost perfect. Conclusion The final GREET checklist comprises 17 items recommended for reporting EBP educational interventions. Further validation of the GREET checklist with experts in EBP research and education is recommended.
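    The per-item agreement statistic described in this abstract can be sketched as follows; the data and item labels below are invented for illustration, and the abstract does not specify the exact rating format:

    ```python
    # Hypothetical sketch: per-item consistency of agreement between
    # participant ratings and a consensus criterion reference, as used
    # to evaluate completeness of reporting against the GREET checklist.
    # All ratings below are invented for illustration.

    def percent_agreement(ratings, criterion):
        """Share of participants whose rating matches the criterion (0-100)."""
        matches = sum(1 for r in ratings if r == criterion)
        return 100.0 * matches / len(ratings)

    # ratings[item] = each participant's judgement ("yes" = item reported)
    ratings = {
        "item 4 Steps of EBP":      ["yes", "no", "no", "no", "no"],
        "item 16 Planned delivery": ["yes", "yes", "yes", "yes", "no"],
    }
    criterion = {"item 4 Steps of EBP": "yes", "item 16 Planned delivery": "yes"}

    for item, r in ratings.items():
        print(f"{item}: {percent_agreement(r, criterion[item]):.0f}% agreement")
    ```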

    A systematic review of how studies describe educational interventions for evidence-based practice: Stage 1 of the development of a reporting guideline

    Abstract Background The aim of this systematic review was to identify what information is included when reporting educational interventions used to facilitate foundational skills and knowledge of evidence-based practice (EBP) training for health professionals. This systematic review comprised the first stage in the three-stage development process for a reporting guideline for educational interventions for EBP. Methods The review question was 'What information has been reported when describing educational interventions targeting foundational evidence-based practice knowledge and skills?' MEDLINE, Academic Search Premier, ERIC, CINAHL, Scopus, Embase, Informit Health, Cochrane Library and Web of Science databases were searched from inception until October-December 2011. Randomised and non-randomised controlled trials reporting original data on educational interventions specific to developing foundational knowledge and skills of evidence-based practice were included. Studies were not appraised for methodological bias; however, reporting frequency and item commonality were compared between a random selection of studies included in the systematic review and a random selection of studies excluded because they were not controlled trials. Twenty-five data items were extracted by two independent reviewers (consistency > 90%). Results Sixty-one studies met the inclusion criteria (n = 29 randomised, n = 32 non-randomised). The most consistently reported items were the learner's stage of training, professional discipline and the evaluation methods used (100%). The least consistently reported items were the instructors' previous teaching experience (n = 8, 13%) and student effort outside face-to-face contact (n = 1, 2%). Conclusion This systematic review demonstrates inconsistencies in describing educational interventions for EBP in randomised and non-randomised trials. To enable educational interventions to be replicable and comparable, improvements in the reporting of educational interventions for EBP are required. In the absence of a specific reporting guideline, a range of items is reported with variable frequency. Which items are important for describing educational interventions that facilitate foundational knowledge and skills in EBP remains to be determined. The findings of this systematic review will be used to inform the next stage in the development of a reporting guideline for educational interventions for EBP.

    A Delphi survey to determine how educational interventions for evidence-based practice should be reported: Stage 2 of the development of a reporting guideline

    BACKGROUND: Undertaking a Delphi exercise is recommended as the second stage in the development process for a reporting guideline. To continue the development of the Guideline for Reporting Evidence-based practice Educational interventions and Teaching (GREET), a Delphi survey was undertaken to determine the consensus opinion of researchers, journal editors and educators in evidence-based practice (EBP) regarding the information items that should be reported when describing an educational intervention for EBP. METHODS: A four-round online Delphi survey was conducted from October 2012 to March 2013. The Delphi panel comprised international researchers, educators and journal editors in EBP. Commencing with an open-ended question, participants were invited to volunteer information considered important when reporting educational interventions for EBP. Over three subsequent rounds participants were invited to rate the importance of each Delphi item on an 11-point Likert rating scale (low 0 to 4, moderate 5 to 6, high 7 to 8 and very high >8). Consensus agreement was set a priori as at least 80 per cent participant agreement. Consensus agreement was initially calculated within the four categories of importance (low to very high), before these four categories were merged into two (<7 and ≥7). Descriptive statistics for each item were computed, including the mean Likert score, standard deviation (SD), range and median participant score. Mean absolute deviation from the median (MAD-M) was also calculated as a measure of participant disagreement. RESULTS: Thirty-six experts agreed to participate and 27 (79%) completed all four rounds. A total of 76 information items were generated across the four survey rounds. Thirty-nine items (51%) were specific to describing the intervention (as opposed to other elements of study design), and consensus agreement was achieved for two of these items (5%). When the four rating categories were merged into two (<7 and ≥7), 18 intervention items achieved consensus agreement. CONCLUSION: This Delphi survey identified 39 items for describing an educational intervention for EBP. These Delphi intervention items will provide the groundwork for the subsequent consensus discussion to determine the final inclusion of items in the GREET, the first reporting guideline for educational interventions in EBP.
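    The consensus rule and disagreement measure described in this abstract can be sketched as follows; the panelist ratings are invented, and the merged two-category rule (≥7 counted as agreement) follows the abstract:

    ```python
    # Hypothetical sketch of the Delphi analysis described above: items
    # rated on an 11-point (0-10) Likert scale, consensus set a priori at
    # >= 80% of panelists rating an item in the same band (here, the
    # merged >= 7 band), and mean absolute deviation from the median
    # (MAD-M) as a measure of disagreement. Ratings are invented.
    from statistics import median

    def consensus_high(scores, threshold=0.80):
        """True if at least `threshold` of panelists rated the item >= 7."""
        return sum(1 for s in scores if s >= 7) / len(scores) >= threshold

    def mad_m(scores):
        """Mean absolute deviation from the median: lower = more agreement."""
        m = median(scores)
        return sum(abs(s - m) for s in scores) / len(scores)

    scores = [8, 9, 7, 10, 8, 7, 9, 6]   # one item's panelist ratings
    print(consensus_high(scores))         # True: 7/8 = 87.5% rated >= 7
    print(mad_m(scores))                  # 1.0
    ```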

    Protocol for development of the guideline for reporting evidence based practice educational interventions and teaching (GREET) statement

    BACKGROUND: An increasing number of studies report the efficacy of educational strategies to facilitate the development of knowledge and skills underpinning evidence based practice (EBP). To date there is no standardised guideline for describing the teaching, evaluation, context or content of EBP educational strategies. The heterogeneity in the reporting of EBP educational interventions makes comparisons between studies difficult. The aim of this program of research is to develop the Guideline for Reporting EBP Educational interventions and Teaching (GREET) statement and an accompanying explanation and elaboration (E&E) paper. METHODS/DESIGN: Three stages are planned for the development process. Stage one will comprise a systematic review to identify features commonly reported in descriptions of EBP educational interventions. In stage two, corresponding authors of articles included in the systematic review and the editors of the journals in which these studies were published will be invited to participate in a Delphi process to reach consensus on items to be considered when reporting EBP educational interventions. The final stage of the project will include the development and pilot testing of the GREET statement and E&E paper. OUTCOME: The final outcome will be the Guideline for Reporting EBP Educational interventions and Teaching (GREET) statement and E&E paper. DISCUSSION: The reporting of health research, including EBP educational research interventions, has been criticised for a lack of transparency and completeness. The development of the GREET statement will enable standardised reporting of EBP educational research, providing a guide for researchers, reviewers and publishers reporting EBP educational interventions.

    Development and validation of the ACE tool: Assessing medical trainees' competency in evidence based medicine

    BACKGROUND: While a variety of instruments have been developed to assess knowledge and skills in evidence based medicine (EBM), few assess all aspects of EBM, including knowledge, skills, attitudes and behaviour, or have been psychometrically evaluated. The aim of this study was to develop and validate an instrument that evaluates medical trainees' competency in EBM across knowledge, skills and attitudes. METHODS: The 'Assessing Competency in EBM' (ACE) tool was developed by the authors, with content and face validity assessed by expert opinion. A cross-sectional sample of 342 medical trainees representing 'novice', 'intermediate' and 'advanced' EBM trainees was recruited to complete the ACE tool. Construct validity, item difficulty, internal reliability and item discrimination were analysed. RESULTS: We recruited 98 EBM-novice, 108 EBM-intermediate and 136 EBM-advanced participants. A statistically significant difference in total ACE score was observed, corresponding to the level of training: on a 0-15-point test, the mean ACE scores were 8.6 for EBM-novice, 9.5 for EBM-intermediate, and 10.4 for EBM-advanced participants (p < 0.0001). Individual item discrimination was excellent (Item Discrimination Index ranging from 0.37 to 0.84), with internal reliability consistent across all but three items (Item-Total Correlations were all positive, ranging from 0.14 to 0.20). CONCLUSION: The 15-item ACE tool is a reliable and valid instrument to assess medical trainees' competency in EBM. The ACE tool provides a novel assessment that measures user performance across the four main steps of EBM. To provide a complete suite of instruments for assessing EBM competency across various patient scenarios, future refinement of the ACE instrument should include further scenarios covering harm, diagnosis and prognosis.
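    The item statistics named in this abstract can be sketched as follows. The abstract does not state how the discrimination index was computed; a common convention (difference in proportion correct between top and bottom scoring groups) is assumed here, and all scores are invented:

    ```python
    # Hypothetical sketch of item difficulty (proportion correct) and a
    # simple item discrimination index: proportion correct in the
    # top-scoring group minus the bottom-scoring group, with groups formed
    # by ranking examinees on total test score. Data are invented.

    def item_difficulty(scores):
        """scores: list of 0/1 responses to one item; proportion correct."""
        return sum(scores) / len(scores)

    def discrimination_index(item_scores, total_scores, frac=0.27):
        """Difference in proportion correct between the top and bottom
        `frac` of examinees, ranked by total test score."""
        ranked = sorted(range(len(total_scores)), key=lambda i: total_scores[i])
        k = max(1, int(len(ranked) * frac))
        low, high = ranked[:k], ranked[-k:]
        p_high = sum(item_scores[i] for i in high) / k
        p_low = sum(item_scores[i] for i in low) / k
        return p_high - p_low

    item = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]        # 0/1 scores on one item
    totals = [14, 3, 12, 10, 5, 13, 4, 6, 11, 9]  # total test scores
    print(item_difficulty(item))                  # 0.6
    print(discrimination_index(item, totals))     # 1.0
    ```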

    Using Biofeedback to Reduce Spatiotemporal Asymmetry Impairs Dynamic Balance in People Post-Stroke

    Background. People poststroke often walk with a spatiotemporally asymmetric gait, due in part to sensorimotor impairments in the paretic lower extremity. Although reducing asymmetry is a common objective of rehabilitation, the effects of improving symmetry on balance have yet to be determined. Objective. We established the concurrent validity of whole-body angular momentum as a measure of balance, and we determined whether reducing step length asymmetry would improve balance by decreasing whole-body angular momentum. Methods. We performed clinical balance assessments and measured whole-body angular momentum during walking using a full-body marker set in a sample of 36 people with chronic stroke. We then used a biofeedback-based approach to modify step length asymmetry in a subset of 15 of these individuals who had marked asymmetry, and we measured the resulting changes in whole-body angular momentum. Results. When participants walked without biofeedback, whole-body angular momentum in the sagittal and frontal planes was negatively correlated with scores on the Berg Balance Scale and the Functional Gait Assessment, supporting the validity of whole-body angular momentum as an objective measure of dynamic balance. We also observed that when participants walked more symmetrically, their whole-body angular momentum in the sagittal plane increased rather than decreased. Conclusions. Voluntary reductions of step length asymmetry in people poststroke resulted in reduced measures of dynamic balance. This is consistent with the idea that after stroke, individuals might have an implicit preference not to deviate from their natural asymmetry while walking because doing so could compromise their balance. Clinical Trials Number: NCT03916562.

    Protocol for the Locomotor Experience Applied Post-stroke (LEAPS) trial: a randomized controlled trial

    Abstract Background Locomotor training using body weight support and a treadmill as a therapeutic modality for rehabilitation of walking post-stroke is being rapidly adopted into clinical practice. There is an urgent need for a well-designed trial to determine the effectiveness of this intervention. The objective of the Locomotor Experience Applied Post-Stroke (LEAPS) trial is to determine whether there is a difference in the proportion of participants who recover walking ability at one year post-stroke when randomized to a specialized locomotor training program (LTP), conducted at 2 or 6 months post-stroke, versus a home-based, non-specific, low-intensity exercise intervention (HEP) provided 2 months post-stroke. We will determine whether the timing of LTP delivery affects gait speed at 1 year and whether initial impairment severity interacts with the timing of LTP. The effect of the number of treatment sessions will be determined by changes in gait speed measured pre-treatment and after 12, 24, and 36 sessions. Methods/Design We will recruit 400 adults with moderate or severe walking limitations within 30 days of stroke onset. At two months post-stroke, participants are stratified by locomotor impairment severity, as determined by overground walking speed, and randomly assigned to one of three groups: (a) LTP-Early; (b) LTP-Late; or (c) Home Exercise Program-Early. The LTP program includes body weight support on a treadmill and overground training. The LTP and HEP interventions are delivered for 36 sessions over 12 weeks. The primary outcome measure is successful walking recovery, defined as achieving a gait speed of 0.4 m/s or greater for persons with initially severe gait impairment, or 0.8 m/s or greater for persons with initially moderate gait impairment. LEAPS is powered to detect a 20% difference in the proportion of participants achieving successful locomotor recovery between the LTP groups and the HEP group, and a 0.1 m/s mean difference in gait speed change between the two LTP groups. Discussion The goal of this single-blinded, phase III randomized clinical trial is to provide evidence to guide post-stroke walking recovery programs. Trial registration NCT00243919.
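    The severity-dependent primary outcome rule in this abstract can be sketched as follows; the thresholds (0.4 m/s for initially severe, 0.8 m/s for initially moderate impairment) come from the abstract, while the function name and interface are invented:

    ```python
    # Hypothetical sketch of the LEAPS primary outcome classification:
    # "successful walking recovery" at one year depends on the
    # participant's initial locomotor impairment severity.

    def walking_recovery_successful(initial_severity, gait_speed_mps):
        """Classify one-year walking recovery per the trial's definition."""
        if initial_severity == "severe":
            return gait_speed_mps >= 0.4   # threshold for initially severe
        if initial_severity == "moderate":
            return gait_speed_mps >= 0.8   # threshold for initially moderate
        raise ValueError("severity must be 'severe' or 'moderate'")

    print(walking_recovery_successful("severe", 0.45))    # True
    print(walking_recovery_successful("moderate", 0.45))  # False
    ```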

    Validation of the modified Fresno Test: assessing physical therapists' evidence based practice knowledge and skills

    Abstract Background Health care educators need valid and reliable tools to assess evidence based practice (EBP) knowledge and skills. Such instruments have yet to be developed for use among physical therapists. The Fresno Test (FT) has been validated only among general practitioners and occupational therapists, and it does not assess integration of research evidence with patient perspectives and clinical expertise. The purpose of this study was to develop and validate a modified FT to assess EBP knowledge and skills relevant to physical therapist (PT) practice. Methods The FT was modified to include PT-specific content and two new questions to assess integration of patient perspectives and clinical expertise with research evidence. An expert panel reviewed the test for content validity. A cross-sectional cohort representing three training levels (EBP-novice students, EBP-trained students, EBP-expert faculty) completed the test. Two blinded raters, not involved in test development, independently scored each test. Construct validity was assessed through analysis of variance for linear trends among known groups. Inter- and intra-rater reliability, internal consistency, item discrimination index, item-total correlation, and difficulty were analyzed. Results Among 108 participants (31 EBP-novice students, 50 EBP-trained students, and 27 EBP-expert faculty), there was a statistically significant (p < 0.0001) difference in total score corresponding to training level. Total score reliability and psychometric properties of items modified for discipline-specific content were excellent (inter-rater ICC(2,1) = 0.91; intra-rater ICC(2,1) = 0.95 and 0.96). Cronbach's α was 0.78. Of the two new items, only one had strong psychometric properties. Conclusions The 13-item modified FT presented here is a valid, reliable assessment of physical therapists' EBP knowledge and skills. One new item assesses integration of patient perspective as part of the EBP model. Educators and researchers may use the 13-item modified FT to evaluate PT EBP curricula and physical therapists' EBP knowledge and skills.